# Image-Text QA
## Qwen2.5-VL-3B-Instruct-GPTQ-Int4
- Author: hfl · License: Apache-2.0
- Task: Image-to-Text · Library: Transformers · Languages: Chinese, English
- Downloads: 1,312 · Likes: 2

This is the GPTQ-Int4 quantized version of the Qwen2.5-VL-3B-Instruct model, suitable for multimodal image-to-text and text-to-text tasks in both Chinese and English.
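A minimal sketch of the single-turn chat message format that Qwen2.5-VL-style image-to-text models typically accept through the Transformers chat template; the helper name, image URL, and question below are illustrative assumptions, and actual inference additionally requires downloading the model weights.

```python
# Sketch (assumption): build the multimodal user message that a
# Qwen2.5-VL-style processor's chat template expects — one image
# entry plus one text entry inside a single user turn.

def build_vqa_message(image_url: str, question: str) -> list[dict]:
    """Build a single-turn image-question message for an image-to-text model."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},  # placeholder URL below
                {"type": "text", "text": question},
            ],
        }
    ]

messages = build_vqa_message("https://example.com/demo.jpg", "Describe this image.")
# `messages` would then be passed to the processor's chat template
# before generation with the downloaded model.
```

The structure mirrors the message layout commonly used by multimodal instruct models, where image and text segments share one `content` list.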
## mPLUG-DocOwl2
- Author: mPLUG · License: Apache-2.0
- Task: Image-to-Text · Format: Safetensors · Language: English
- Downloads: 482 · Likes: 99

mPLUG-DocOwl2 is an OCR-free multimodal large language model for multi-page document understanding; it encodes document content efficiently via a high-resolution document compressor.
## Llama-3.2-11B-Vision-Instruct
- Author: meta-llama
- Task: Image-to-Text · Library: Transformers · Languages: multiple
- Downloads: 784.19k · Likes: 1,424

Llama 3.2 is a multilingual, multimodal large language model released by Meta, supporting image-to-text and text-to-text tasks with robust cross-modal understanding capabilities.